Vagueness for the practical man
How can computers be vague
Vagueness for computer programs
Computers must use vague concepts
It has been said that the philosophical discussion of the meaning
of a term should not depend on first answering all related scientific
questions about the domain. Likewise, equipping computer programs with
common sense should not depend on first answering all related
philosophical questions. Carrying out Turing's plan of designing
an educable child-program requires that the program be able to
accept new concepts without knowing their full meaning. Likewise,
a program should be able to use a term like "murder" without being
able to decide all cases. In fact it should not even have to know
about puzzling cases. Like a human, its typical situation should be
that it knows of no cases that it cannot decide, but when one comes
up, it admits to being puzzled. However, the program cannot depend
on the programmer having anticipated all possible forms of puzzlement.
This lecture will discuss possible ways of formalizing such vague
concepts in first order logic.
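To make this concrete, here is a sketch, with merely illustrative
predicate names, of how a vague concept might get a partial axiomatization
in first order logic: sufficient conditions for the concept to apply and
sufficient conditions for it not to apply, with no definition that decides
every case.

	∀x.(deliberate-killing(x) ∧ ¬legally-sanctioned(x) ⊃ murder(x))
	∀x.(accidental-death(x) ⊃ ¬murder(x))

A program using only such axioms decides the ordinary cases, and when a
case satisfies neither hypothesis it can do no better than report itself
puzzled, which is just the behavior we want.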
Philosophical discussion of the meaning of the word "fish" need
not wait for the solution of all scientific problems of vertebrate
classification. Likewise, programming a computer to use the word "thing"
should not depend on having solved all philosophical problems concerning
it. A human can use the word "thing" with no sense of puzzlement
until confronted with one of the conundrums philosophers have discovered.
Indeed a philosopher can suppose he has solved the conundrums and then
be surprised by a new one.
This paper discusses formal systems for using incomplete
and approximate concepts. Our goal is that a computer program should
be able to use an ambiguous concept without noticing the ambiguity
until a situation arises or is contemplated in which the ambiguity is
actual. The program should then have the same capacity as a human to
consider possible resolutions of the ambiguity and adopt one, split the
concept into several, or abandon the problem in puzzlement while
continuing to use the concept in ordinary cases.
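One formal device for the splitting, offered only as a sketch (the
predicates decided, thing1 and thing2 are my notation, not a settled
proposal), replaces the ambiguous predicate by two successors that agree
with it on every previously decided case and differ on the puzzling one:

	∀x.(decided(x) ⊃ (thing1(x) ≡ thing(x)) ∧ (thing2(x) ≡ thing(x)))
	thing1(puzzle) ∧ ¬thing2(puzzle)

Ordinary reasoning can then proceed with either successor, since the two
coincide wherever the old concept was usable.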
%2De re - de dicto%1 puzzles are a case in point, though they
probably won't turn out to be the basic case. Suppose, for example,
that it has been declared a crime to "attempt to bribe a public official".
Are any of the following defenses valid?
%2"I didn't know he was a
public official".
"I mistakenly thought he was a public official,
but he wasn't".
"When I let it be known that I would pay α$5,000 to
any public official who would fix my drunk driving convinction, there
was no-one I was attempting to bribe, since there was no public official
who could fix the convinction"%1.
The point is not to resolve these puzzles but to observe that
humans can use the notion of attempting to bribe a public official for
years without ever noticing them, can fail to resolve them when they do
arise, and can go back to the ambiguous notion for ordinary use.
The reason for conjecturing that this capability is needed for
artificial intelligence is the suspicion that there is no way to
define a language that cannot suffer from these ambiguities. Moreover,
it seems that philosophers' attempts to resolve them by extending the
language are always subject to new ambiguities invented by other
philosophers.